Download Confluent Certified Developer for Apache Kafka Certification Examination.CCDAK.PracticeTest.2025-05-12.79q.vcex

Vendor: Confluent
Exam Code: CCDAK
Exam Name: Confluent Certified Developer for Apache Kafka Certification Examination
Date: May 12, 2025
File Size: 46 KB

Demo Questions

Question 1
A client connects to a broker in the cluster and sends a fetch request for a partition of a topic. It gets a NotLeaderForPartitionException in the response. How does the client handle this situation?
  A. Get the id of the broker hosting the leader replica from Zookeeper and send the request to it
  B. Send a metadata request to the same broker for the topic and select the broker hosting the leader replica
  C. Send a metadata request to Zookeeper for the topic and select the broker hosting the leader replica
  D. Send a fetch request to each broker in the cluster
Correct answer: B
Explanation:
If the client has stale leader information for a partition, it issues a metadata request. Metadata requests can be handled by any node, so the client then knows which broker is the designated leader for each topic partition. Produce and fetch requests can only be sent to the node hosting the partition leader.
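In practice the producer and consumer clients refresh metadata and retry automatically when they hit this retriable error, but the leader information itself can be inspected directly. A minimal sketch using the AdminClient, assuming recent kafka-clients (3.1+), a broker at localhost:9092, and a hypothetical topic named topic1:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class LeaderLookup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // A metadata request can go to any broker; the response names the
            // current leader for every partition of the topic.
            TopicDescription desc = admin.describeTopics(List.of("topic1"))
                    .allTopicNames().get().get("topic1");
            desc.partitions().forEach(p ->
                    System.out.printf("partition %d -> leader broker %d%n",
                            p.partition(), p.leader().id()));
        }
    }
}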
Question 2
To continuously export data from Kafka into a target database, I should use
  A. Kafka Producer
  B. Kafka Streams
  C. Kafka Connect Sink
  D. Kafka Connect Source
Correct answer: C
Explanation:
A Kafka Connect Sink is used to export data from Kafka to external systems such as databases, while a Kafka Connect Source is used to import data from external systems into Kafka.
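As an illustration, a sink connector configuration for the Confluent JDBC sink might look like the following sketch; the connector class and property names are the standard ones, while the connector name, topic, database URL, and credentials are made-up placeholders:

# hypothetical JDBC sink: assumes the Confluent JDBC connector is installed
name=orders-jdbc-sink
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
topics=orders
connection.url=jdbc:postgresql://localhost:5432/shop
connection.user=kafka
connection.password=secret
auto.create=true
insert.mode=insert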
Question 3
A consumer starts with auto.offset.reset=none, and the topic partition currently has data for offsets 45 through 2311. The consumer group has previously committed offset 10 for the topic. Where will the consumer read from?
  A. offset 45
  B. offset 10
  C. it will crash
  D. offset 2311
Correct answer: C
Explanation:
auto.offset.reset=none means that the consumer will throw an exception if the offset it is trying to resume from is no longer available in Kafka, which is the case here: the committed offset 10 is below the earliest available offset 45.
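A minimal sketch of such a consumer, assuming a broker at localhost:9092 and a hypothetical group id my-group; with auto.offset.reset=none the poll fails instead of silently resetting:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetOutOfRangeException;
import org.apache.kafka.common.serialization.StringDeserializer;

public class StrictOffsetConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Fail instead of jumping to earliest/latest when the offset is gone:
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("topic1"));
            consumer.poll(Duration.ofSeconds(1));
        } catch (OffsetOutOfRangeException e) {
            // The committed offset (10) is outside the partition's current
            // range (45..2311), so the consumer throws rather than resetting.
            e.printStackTrace();
        }
    }
}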
Question 4
What exceptions may be caught by the following producer? (select two) 
ProducerRecord<String, String> record =
    new ProducerRecord<>("topic1", "key1", "value1");
try {
    producer.send(record);
} catch (Exception e) {
    e.printStackTrace();
}
  A. BrokerNotAvailableException
  B. SerializationException
  C. InvalidPartitionsException
  D. BufferExhaustedException
Correct answer: BD
Explanation:
These are the client-side exceptions that may be encountered before the message is sent to the broker, i.e. before a Future is returned by the send() method: a SerializationException if the key or value cannot be serialized, and a BufferExhaustedException if the producer's buffer is full and the record cannot be accommodated within max.block.ms.
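Broker-side errors, by contrast, are not raised by this try/catch; they surface asynchronously. A sketch of the same send with a callback (continuing the snippet above), which is where such errors would be reported:

producer.send(record, (metadata, exception) -> {
    if (exception != null) {
        // Broker-side or retriable errors (e.g. a TimeoutException) arrive here,
        // after send() has already returned its Future.
        exception.printStackTrace();
    } else {
        System.out.printf("stored at %s-%d@%d%n",
                metadata.topic(), metadata.partition(), metadata.offset());
    }
});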
Question 5
When using the Confluent Kafka Distribution, where does the schema registry reside?
  A. As a separate JVM component
  B. As an in-memory plugin on your Zookeeper cluster
  C. As an in-memory plugin on your Kafka Brokers
  D. As an in-memory plugin on your Kafka Connect Workers
Correct answer: A
Explanation:
The Schema Registry is a separate application, running in its own JVM, that provides a RESTful interface for storing and retrieving Avro schemas.
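For illustration, a producer pointing at the registry might be configured like this sketch; the serializer class and the schema.registry.url property are the standard Confluent ones, while the hosts are placeholders (8081 is the registry's default port):

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
// The registry is a separate HTTP service, not part of the brokers:
props.put("schema.registry.url", "http://localhost:8081");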
Question 6
What Java library is KSQL based on?
  A. Kafka Streams
  B. REST Proxy
  C. Schema Registry
  D. Kafka Connect
Correct answer: A
Explanation:
KSQL is based on Kafka Streams: it lets you express transformations in a SQL-like language that are automatically converted into a Kafka Streams program behind the scenes.
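As a rough sketch of that correspondence, using made-up topic names and assuming the record value is a plain numeric string:

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;

public class KsqlEquivalent {
    public static void main(String[] args) {
        // A KSQL statement such as
        //   CREATE STREAM big_orders AS SELECT * FROM orders WHERE CAST(value AS DOUBLE) > 100;
        // is compiled into a Kafka Streams topology roughly equivalent to:
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> orders = builder.stream("orders"); // hypothetical topic
        orders.filter((key, value) -> Double.parseDouble(value) > 100)
              .to("big_orders");
        System.out.println(builder.build().describe());
    }
}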
Question 7
How would you find all the partitions for which one or more of the replicas are not in sync with the leader?
  A. kafka-topics.sh --bootstrap-server localhost:9092 --describe --unavailable-partitions
  B. kafka-topics.sh --zookeeper localhost:2181 --describe --unavailable-partitions
  C. kafka-topics.sh --broker-list localhost:9092 --describe --under-replicated-partitions
  D. kafka-topics.sh --zookeeper localhost:2181 --describe --under-replicated-partitions
Correct answer: D
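Note that on newer Kafka versions (3.0+) kafka-topics.sh connects via --bootstrap-server instead of --zookeeper, with the same --describe --under-replicated-partitions flags. The same check can also be sketched programmatically with the AdminClient (broker address and topic name are placeholders):

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class UnderReplicated {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            admin.describeTopics(List.of("topic1")).allTopicNames().get()
                 .forEach((topic, desc) -> desc.partitions().stream()
                         // under-replicated: fewer in-sync replicas than assigned replicas
                         .filter(p -> p.isr().size() < p.replicas().size())
                         .forEach(p -> System.out.printf("%s-%d is under-replicated%n",
                                 topic, p.partition())));
        }
    }
}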
Question 8
What is a generic unique id that I can use for messages I receive from a consumer?
  A. topic + partition + timestamp
  B. topic + partition + offset
  C. topic + timestamp
Correct answer: B
Explanation:
The triple (topic, partition, offset) uniquely identifies a message in Kafka.
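A short sketch of building such an id, assuming a subscribed KafkaConsumer<String, String> named consumer is in scope (delimiter choice is arbitrary):

for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
    // topic + partition + offset is stable and unique across the cluster
    String messageId = record.topic() + "-" + record.partition() + "-" + record.offset();
    System.out.println(messageId);
}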
Question 9
Your streams application is reading from an input topic that has 5 partitions. You run 5 instances of your application, each with num.stream.threads set to 5. How many stream tasks will be created and how many will be active?
  A. 5 created, 1 active
  B. 5 created, 5 active
  C. 25 created, 25 active
  D. 25 created, 5 active
Correct answer: D
Explanation:
One partition is assigned to one thread at a time, so only 5 tasks will be active; since 5 instances each start 5 threads, 25 threads (i.e. tasks) are created in total.
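The setting in question, sketched with a made-up application id (the StreamsConfig constant resolves to num.stream.threads):

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app"); // hypothetical id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// 5 threads per instance, 5 instances -> 25 threads across the group:
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 5);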
Question 10
The kafka-console-consumer CLI, when used with the default options
  A. uses a random group id
  B. always uses the same group id
  C. does not use a group id
Correct answer: A
Explanation:
If a group id is not specified, kafka-console-consumer generates a random consumer group id.
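To pin the group instead of getting a random one, the --group flag can be passed explicitly; topic and group names here are placeholders:

kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topic1 --group my-group --from-beginning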
Question 11
A Kafka producer application wants to send log messages to a topic without specifying any key. Which properties are mandatory in the producer configuration? (select three)
  A. bootstrap.servers
  B. partition
  C. key.serializer
  D. value.serializer
  E. key
  F. value
Correct answer: ACD
Explanation:
bootstrap.servers, key.serializer, and value.serializer are all mandatory; even when records carry no key, a key serializer must still be configured.
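A minimal sketch of such a keyless producer; the topic name "logs" and the log line are made-up placeholders:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KeylessLogProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The three mandatory settings:
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Two-argument constructor: the key is null, so records are
            // distributed across partitions.
            producer.send(new ProducerRecord<>("logs", "an example log line"));
        }
    }
}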